Results 1 - 20 of 4,967
1.
Sci Rep ; 14(1): 11054, 2024 05 14.
Article in English | MEDLINE | ID: mdl-38744976

ABSTRACT

Brain-machine interfaces (BMIs) can substantially improve the quality of life of elderly or disabled people. However, performing complex action sequences with a BMI system is onerous because commands must be issued sequentially. Taking a fundamentally different approach, we designed a BMI system that reads out mental planning activity and issues commands proactively. To demonstrate this, we recorded brain activity from freely moving monkeys performing an instructed task and decoded it with an energy-efficient, small, and mobile field-programmable gate array hardware decoder that triggers real-time action execution on smart devices. At its core is an adaptive decoding algorithm that compensates for day-by-day neuronal signal fluctuations with minimal re-calibration effort. We show that open-loop planning-ahead control is possible using signals from primary and pre-motor areas, leading to a significant time gain in the execution of action sequences. This novel approach thus provides a stepping stone toward improved, more humane control of different smart environments with mobile brain-machine interfaces.


Subjects
Algorithms, Brain-Computer Interfaces, Animals, Brain/physiology, Macaca mulatta
2.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732846

ABSTRACT

Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the state of the human brain. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed method uses the α, β, and θ power bands of the EEG signals as gesture features. A SOM-Hebb classifier is utilized to classify the feature vectors. We applied the proposed method to develop an online facial gesture recognition system. The facial gestures were defined by combining facial movements that are easy to detect in EEG signals. The recognition accuracy of the system was examined through experiments and ranged from 76.90% to 97.57%, depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, though this is still quite accurate compared to other EEG-based recognition systems. The online recognition system was implemented in MATLAB and took 5.7 s to complete the recognition flow.
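As a hedged illustration of the kind of band-power feature extraction this abstract describes (the paper's own pipeline is not reproduced here), the sketch below estimates θ/α/β band powers per channel with Welch's method; the sampling rate, band edges, and array shapes are all assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power_features(eeg, fs=250.0):
    """Return theta/alpha/beta band powers per channel (channels x 3)."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs), axis=-1)
    feats = []
    for lo, hi in bands.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))   # mean PSD inside the band
    return np.column_stack(feats)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))               # 8 channels, 4 s at 250 Hz
features = band_power_features(eeg)
print(features.shape)                              # (8, 3): theta/alpha/beta per channel
```

In the paper these feature vectors would then be fed to the SOM-Hebb classifier; here they are simply printed.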


Subjects
Brain-Computer Interfaces, Electroencephalography, Gestures, Humans, Electroencephalography/methods, Face/physiology, Algorithms, Automated Pattern Recognition/methods, Computer-Assisted Signal Processing, Brain/physiology, Male
3.
Comput Biol Med ; 175: 108504, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701593

ABSTRACT

Convolutional neural networks (CNNs) have been widely applied to decode electroencephalography (EEG) signals in motor imagery (MI)-based brain-computer interfaces (BCIs). However, due to the limited receptive field of the convolutional kernel, a CNN only extracts features from local regions, without considering the long-term dependencies needed for EEG decoding. Apart from long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it offers a more comprehensive understanding of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines a CNN with a self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method called signal segmentation and recombination is proposed to improve the generalization capability of the network. Experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that the proposed method outperforms state-of-the-art methods and achieves a 4-class average accuracy of 85.03% on the BCIC-IV-2a dataset. These results demonstrate the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provide a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
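The "signal segmentation and recombination" augmentation can be sketched as follows. This is a minimal interpretation, assuming same-class trials are cut into equal time segments and a new trial is stitched together from segments of randomly chosen donor trials; the segment count and shapes are illustrative, not the paper's settings.

```python
import numpy as np

def segment_recombine(trials, n_segments=4, n_new=10, rng=None):
    """Augment same-class EEG trials (trials x channels x time) by splitting
    the time axis into equal segments and stitching together segments drawn
    from randomly chosen donor trials."""
    rng = rng or np.random.default_rng()
    n_trials, n_ch, n_t = trials.shape
    seg_len = n_t // n_segments
    new = np.empty((n_new, n_ch, seg_len * n_segments))
    for i in range(n_new):
        donors = rng.integers(0, n_trials, size=n_segments)  # one donor per slot
        for s, d in enumerate(donors):
            new[i, :, s*seg_len:(s+1)*seg_len] = trials[d, :, s*seg_len:(s+1)*seg_len]
    return new

rng = np.random.default_rng(1)
trials = rng.standard_normal((20, 22, 1000))   # 20 trials, 22 channels
augmented = segment_recombine(trials, rng=rng)
print(augmented.shape)                          # (10, 22, 1000)
```

Because segments keep their original temporal position, each synthetic trial preserves the within-class time course while varying across trials.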


Subjects
Brain-Computer Interfaces, Electroencephalography, Neural Networks, Humans, Electroencephalography/methods, Computer-Assisted Signal Processing, Imagination/physiology, Deep Learning
4.
J Neural Eng ; 21(3)2024 May 07.
Article in English | MEDLINE | ID: mdl-38639058

ABSTRACT

Objective. Building brain-computer interface (BCI) systems with large, directly accessible instruction sets remains one of the difficulties in BCI research. Research toward high target resolution (⩾100) has not yet entered a rapid development stage, which contradicts the application requirements. Steady-state visual evoked potential (SSVEP)-based BCIs have an advantage in terms of the number of targets, but the competitive mechanism between the target stimulus and its neighboring stimuli is a key challenge that prevents the target resolution from being improved significantly. Approach. In this paper, we reverse the competitive mechanism and propose a frequency spatial multiplexing method to produce more targets with limited frequencies. In the proposed paradigm, we replicated each flicker stimulus as a 2 × 2 matrix and arranged the matrices of all frequencies in a tiled fashion to form the interaction interface. With different arrangements, we designed and tested three example paradigms with different layouts. Furthermore, we designed a graph neural network that distinguishes between targets of the same frequency by recognizing the different electroencephalography (EEG) response distribution patterns evoked by each target and its neighboring targets. Main results. Extensive experiments with eleven subjects were performed to verify the validity of the proposed method. The average classification accuracies in the offline validation experiments for the three paradigms were 89.16%, 91.38%, and 87.90%, with information transfer rates (ITRs) of 51.66, 53.96, and 50.55 bits/min, respectively. Significance. This study exploited the positional relationship between stimuli rather than circumventing the competing-response problem. Therefore, other state-of-the-art methods that focus on enhancing the efficiency of SSVEP detection can be combined with the present method to achieve very promising improvements.


Subjects
Brain-Computer Interfaces, Electroencephalography, Visual Evoked Potentials, Photic Stimulation, Humans, Visual Evoked Potentials/physiology, Electroencephalography/methods, Male, Photic Stimulation/methods, Female, Adult, Young Adult, Algorithms
5.
J Neural Eng ; 21(3)2024 May 07.
Article in English | MEDLINE | ID: mdl-38648782

ABSTRACT

Objective. Brain-computer interfaces (BCIs) have the potential to reinstate lost communication faculties. Results from speech-decoding studies indicate that a usable speech BCI based on activity in the sensorimotor cortex (SMC) can be achieved using subdurally implanted electrodes. However, the optimal characteristics of a successful speech implant are largely unknown. We address this topic in a high-field blood-oxygenation-level-dependent functional magnetic resonance imaging (fMRI) study, by assessing the decodability of spoken words as a function of hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Approach. Twelve subjects underwent a 7T fMRI experiment in which they pronounced 6 different pseudo-words over 6 runs. We divided the SMC by hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Classification was performed in these SMC areas using a multiclass support vector machine (SVM). Main results. Significant classification was possible from the SMC, but no preference for the left or right hemisphere, nor for the precentral or postcentral gyrus, was detected for optimal word classification. Classification using information from the cortical surface was slightly better than using information from deep in the central sulcus, and was highest within the ventral 50% of the SMC. Confusion matrices were highly similar across the entire SMC. An SVM-searchlight analysis revealed significant classification in the superior temporal gyrus and left planum temporale in addition to the SMC. Significance. The current results support a unilateral implant using surface electrodes covering the ventral 50% of the SMC. The added value of depth electrodes is unclear. We did not observe evidence for variations in the qualitative nature of information across the SMC. The current results need to be confirmed in paralyzed patients performing attempted speech.


Subjects
Brain-Computer Interfaces, Magnetic Resonance Imaging, Speech, Humans, Magnetic Resonance Imaging/methods, Male, Adult, Female, Speech/physiology, Young Adult, Implanted Electrodes, Brain Mapping/methods
6.
Sci Rep ; 14(1): 9281, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38654008

ABSTRACT

Steady-state visual evoked potentials (SSVEP) are electroencephalographic signals elicited when the brain is exposed to a visual stimulus with a steady frequency. We analyzed the temporal dynamics of SSVEP during sustained flicker stimulation at 5, 10, 15, 20 and 40 Hz. We found that the amplitudes of the responses were not stable over time. For a 5 Hz stimulus, the responses progressively increased, while, for higher flicker frequencies, the amplitude increased during the first few seconds and often showed a continuous decline afterward. We hypothesize that these two distinct sets of frequency-dependent SSVEP signal properties reflect the contribution of parvocellular and magnocellular visual pathways generating sustained and transient responses, respectively. These results may have important applications for SSVEP signals used in research and brain-computer interface technology and may contribute to a better understanding of the frequency-dependent temporal mechanisms involved in the processing of prolonged periodic visual stimuli.
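One plausible way to observe the amplitude dynamics described above is a sliding-window estimate of the response amplitude at the flicker frequency. The sketch below is illustrative, not the authors' analysis pipeline; the sampling rate, window, and step sizes are assumptions, and the synthetic signal is given a decaying envelope to mimic the decline reported at higher frequencies.

```python
import numpy as np

def ssvep_amplitude_course(signal, fs, f_stim, win_s=1.0, step_s=0.5):
    """Track the amplitude at the flicker frequency over time with a
    sliding-window Fourier projection onto a complex probe at f_stim."""
    win, step = int(win_s * fs), int(step_s * fs)
    t = np.arange(win) / fs
    ref = np.exp(-2j * np.pi * f_stim * t)
    amps = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        amps.append(2 * abs(np.dot(seg, ref)) / win)  # sinusoid amplitude estimate
    return np.array(amps)

fs, f = 250, 10.0
t = np.arange(0, 10, 1/fs)
x = np.sin(2*np.pi*f*t) * np.linspace(1.0, 0.3, t.size)   # decaying 10 Hz SSVEP
course = ssvep_amplitude_course(x, fs, f)
print(course[0] > course[-1])                             # True: amplitude declines
```

The one-second window puts 10 Hz exactly on a frequency bin, so the projection recovers the local sinusoid amplitude without leakage.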


Subjects
Electroencephalography, Visual Evoked Potentials, Photic Stimulation, Visual Evoked Potentials/physiology, Humans, Male, Female, Adult, Young Adult, Brain-Computer Interfaces, Visual Cortex/physiology
7.
J Neural Eng ; 21(2)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38579696

ABSTRACT

Objective. Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g. Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g. C and C++). Approach. To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. Main results. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov identifier: NCT00912041) performed a standard cursor-control task, in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent-variable models like Latent Factor Analysis via Dynamical Systems. Significance. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.
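BRAND's node/stream graph can be mimicked in miniature with standard-library primitives. The sketch below uses Python threads and a queue purely as an analogy: the real system runs separate Linux processes exchanging data through Redis streams across languages, so everything here (node functions, chunk format, end-of-stream marker) is an assumption for illustration.

```python
import queue, threading

# Toy analogue of BRAND's node/stream graph: each "node" runs in its own
# thread and passes data downstream through a stream (here, a Queue).
def acquisition_node(out_stream, n_chunks=5):
    for i in range(n_chunks):
        out_stream.put([i] * 4)                  # pretend 4-channel data chunk
    out_stream.put(None)                         # end-of-stream marker

def decoder_node(in_stream, results):
    while (chunk := in_stream.get()) is not None:
        results.append(sum(chunk) / len(chunk))  # stand-in "decode" step

stream, decoded = queue.Queue(), []
threads = [threading.Thread(target=acquisition_node, args=(stream,)),
           threading.Thread(target=decoder_node, args=(stream, decoded))]
for th in threads: th.start()
for th in threads: th.join()
print(decoded)                                   # [0.0, 1.0, 2.0, 3.0, 4.0]
```

The point of the asynchronous design is visible even in the toy: the acquisition and decoding stages run concurrently and are coupled only through the stream.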


Subjects
Brain-Computer Interfaces, Neurosciences, Humans, Neural Networks
8.
J Neurosci Methods ; 406: 110132, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38604523

ABSTRACT

BACKGROUND: Traditional therapist-based rehabilitation training for patients with movement impairment is laborious and expensive. To reduce the cost and improve the treatment effect of rehabilitation, many methods based on human-computer interaction (HCI) technology have been proposed, such as robot-assisted therapy and functional electrical stimulation (FES). However, because these methods do not actively involve the brain, they have limited effects on promoting the remodeling of damaged nerves. NEW METHOD: Based on the neurofeedback training provided by the combination of a brain-computer interface (BCI) and an exoskeleton, this paper proposes a multimodal brain-controlled active rehabilitation system to help improve limb function. A joint control mode of steady-state visual evoked potential (SSVEP) and motor imagery (MI) is adopted to achieve self-paced control and thus maximize the degree of brain involvement, and an SSVEP-based requirement-selection function is added to facilitate communication with aphasia patients. COMPARISON WITH EXISTING METHODS: In addition, the Transformer is introduced as the MI decoder in the asynchronous online BCI to improve the global perception of electroencephalogram (EEG) signals and maintain the sensitivity and efficiency of the system. RESULTS: In two multi-task online experiments covering left-hand, right-hand, foot, and idle states, the subject achieved best accuracies of 91.25% and 92.50%, respectively. CONCLUSION: Compared with previous studies, this paper aims to establish a high-performance, low-latency brain-controlled rehabilitation system that provides an independent and autonomous brain-controlled mode, so as to improve the effect of neural remodeling. The performance of the proposed method is evaluated through offline and online experiments.


Subjects
Brain-Computer Interfaces, Electroencephalography, Powered Exoskeleton, Neurofeedback, Humans, Electroencephalography/methods, Male, Neurofeedback/methods, Neurofeedback/instrumentation, Visual Evoked Potentials/physiology, Adult, Brain/physiology, Brain/physiopathology, Female, Young Adult, Imagination/physiology, Imagery (Psychotherapy)/methods
9.
Sci Adv ; 10(15): eadm8246, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38608024

ABSTRACT

Temporally coordinated neural activity is central to nervous system function and purposeful behavior. Still, there is a paucity of evidence demonstrating how this coordinated activity within cortical and subcortical regions governs behavior. We investigated this between the primary motor cortex (M1) and the contralateral cerebellar cortex as rats learned a neuroprosthetic/brain-machine interface (BMI) task. In a neuroprosthetic task, actuator movements are causally linked to M1 "direct" neurons that drive the decoder for successful task execution. However, it is unknown how task-related M1 activity interacts with the cerebellum. We observed a notable 3 to 6 Hz coherence that emerged between these regions' local field potentials (LFPs) with learning, which also modulated task-related spiking. We identified robust task-related indirect modulation in the cerebellum, which developed a preferential relationship with M1 task-related activity. Inhibiting cerebellar cortical and deep-nuclei activity through optogenetics led to performance impairments in M1-driven neuroprosthetic control. Together, these results demonstrate that cerebellar influence is necessary for M1-driven neuroprosthetic control.
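Spectral coherence between two LFP channels, of the kind reported here, can be estimated with `scipy.signal.coherence`. The synthetic example below (sampling rate, duration, and noise levels are all assumed) shows coherence concentrating in a shared low-frequency rhythm, analogous to the 3-6 Hz band in the study.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, dur = 200, 60                         # 200 Hz "LFP", 60 s
t = np.arange(0, dur, 1/fs)
shared = np.sin(2*np.pi*5*t)              # common 5 Hz rhythm (cf. 3-6 Hz band)
m1  = shared + rng.standard_normal(t.size)   # toy M1 channel
cbl = shared + rng.standard_normal(t.size)   # toy cerebellar channel

f, cxy = coherence(m1, cbl, fs=fs, nperseg=fs*2)   # 0.5 Hz resolution
band = (f >= 3) & (f <= 6)
print(cxy[band].mean() > cxy[~band].mean())        # True: coherence peaks in 3-6 Hz
```

Because the two channels share only the 5 Hz component, magnitude-squared coherence is near 1 at that frequency and near the chance floor elsewhere.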


Subjects
Brain-Computer Interfaces, Cerebellum, Animals, Rats, Cell Nucleus, Learning, Movement
10.
Sensors (Basel) ; 24(7)2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38610540

ABSTRACT

In the field of neuroscience, brain-computer interfaces (BCIs) are used to connect the human brain with external devices, providing insights into the neural mechanisms underlying cognitive processes, including aesthetic perception. Non-invasive BCIs, such as EEG and fNIRS, are critical for studying central nervous system activity and understanding how individuals with cognitive deficits process and respond to aesthetic stimuli. This study assessed twenty participants who were divided into control and impaired aging (AI) groups based on MMSE scores. EEG and fNIRS were used to measure their neurophysiological responses to aesthetic stimuli that varied in pleasantness and dynamism. Significant differences were identified between the groups in P300 amplitude and late positive potential (LPP), with controls showing greater reactivity. AI subjects showed an increase in oxyhemoglobin in response to pleasurable stimuli, suggesting hemodynamic compensation. This study highlights the effectiveness of multimodal BCIs in identifying the neural basis of aesthetic appreciation and impaired aging. Despite its limitations, such as sample size and the subjective nature of aesthetic appreciation, this research lays the groundwork for cognitive rehabilitation tailored to aesthetic perception, improving the comprehension of cognitive disorders through integrated BCI methodologies.


Subjects
Brain-Computer Interfaces, Humans, Aging, Brain, Esthetics, Perception
11.
Sci Rep ; 14(1): 9221, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38649681

ABSTRACT

Technological advances in head-mounted displays (HMDs) facilitate the acquisition of physiological data of the user, such as gaze, pupil size, or heart rate. Still, interactions with such systems can be prone to errors, including unintended behavior or unexpected changes in the presented virtual environments. In this study, we investigated if multimodal physiological data can be used to decode error processing, which has been studied, to date, with brain signals only. We examined the feasibility of decoding errors solely with pupil size data and proposed a hybrid decoding approach combining electroencephalographic (EEG) and pupillometric signals. Moreover, we analyzed if hybrid approaches can improve existing EEG-based classification approaches and focused on setups that offer increased usability for practical applications, such as the presented game-like virtual reality flight simulation. Our results indicate that classifiers trained with pupil size data can decode errors above chance. Moreover, hybrid approaches yielded improved performance compared to EEG-based decoders in setups with a reduced number of channels, which is crucial for many out-of-the-lab scenarios. These findings contribute to the development of hybrid brain-computer interfaces, particularly in combination with wearable devices, which allow for easy acquisition of additional physiological data.
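A minimal sketch of the hybrid, feature-level fusion idea: pupil features are concatenated with EEG features before training a single classifier. Everything below is an assumption for illustration; a simple ridge-regularized least-squares classifier stands in for the study's decoders, and the synthetic "error signature" strengths are invented.

```python
import numpy as np

def fit_linear(X, y):
    """Ridge-regularized least-squares classifier (labels +1/-1)."""
    Xb = np.column_stack([X, np.ones(len(X))])           # add bias column
    return np.linalg.solve(Xb.T @ Xb + 1e-3*np.eye(Xb.shape[1]), Xb.T @ y)

def predict(w, X):
    return np.sign(np.column_stack([X, np.ones(len(X))]) @ w)

rng = np.random.default_rng(0)
n = 200
y = np.where(rng.random(n) < 0.5, 1.0, -1.0)             # error vs. correct trials
eeg = y[:, None]*0.3 + rng.standard_normal((n, 8))       # weak signature in EEG
pupil = y[:, None]*0.3 + rng.standard_normal((n, 2))     # pupil dilation after errors
hybrid = np.column_stack([eeg, pupil])                   # feature-level fusion

accs = {}
for name, X in [("eeg", eeg), ("hybrid", hybrid)]:
    w = fit_linear(X[:100], y[:100])
    accs[name] = (predict(w, X[100:]) == y[100:]).mean()
print(accs)
```

With informative features in both modalities, the fused classifier typically matches or exceeds the EEG-only one, mirroring the gains the study reports for reduced-channel setups.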


Subjects
Brain-Computer Interfaces, Electroencephalography, Pupil, Virtual Reality, Humans, Electroencephalography/methods, Adult, Male, Pupil/physiology, Female, Young Adult, Computer Simulation, Brain/physiology, Heart Rate/physiology
12.
J Neural Eng ; 21(2)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38626760

ABSTRACT

Objective. In recent years, electroencephalogram (EEG)-based brain-computer interfaces (BCIs) applied to inner speech classification have gathered attention for their potential to provide a communication channel for individuals with speech disabilities. However, existing methodologies for this task fall short of the accuracy needed for real-life implementation. This paper explores the use of inter-trial coherence (ITC) as a feature extraction technique to enhance inner speech classification accuracy in EEG-based BCIs. Approach. To address the objective, this work presents a novel methodology that employs ITC for feature extraction within a complex Morlet time-frequency representation. The study involves a dataset comprising EEG recordings of four different words for ten subjects, with three recording sessions per subject. The extracted features are then classified using k-nearest neighbors (kNN) and a support vector machine (SVM). Main results. The average classification accuracy achieved using the proposed methodology is 56.08% for kNN and 59.55% for SVM. These results demonstrate comparable or superior performance relative to previous works. The exploration of inter-trial phase coherence as a feature extraction technique proves promising for enhancing accuracy in inner speech classification within EEG-based BCIs. Significance. This study contributes to the advancement of EEG-based BCIs for inner speech classification by introducing a feature extraction methodology using ITC. The obtained results, on par with or superior to previous works, highlight the potential significance of this approach in improving the accuracy of BCI systems. The exploration of this technique lays the groundwork for further research toward inner speech decoding.
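Inter-trial coherence itself is straightforward to compute: it is the length of the mean unit phase vector across trials (1 for perfect phase locking, near 0 for random phase). The sketch below uses Hilbert-transform phases for brevity instead of the paper's complex Morlet time-frequency representation; all signal parameters are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def inter_trial_coherence(trials):
    """ITC at each time point: magnitude of the across-trial mean of
    unit phase vectors (trials x time)."""
    phases = np.angle(hilbert(trials, axis=-1))
    return np.abs(np.exp(1j * phases).mean(axis=0))

rng = np.random.default_rng(0)
t = np.arange(0, 1, 1/250)
locked = np.array([np.sin(2*np.pi*8*t) for _ in range(30)])           # same phase
jittered = np.array([np.sin(2*np.pi*8*t + rng.uniform(0, 2*np.pi))
                     for _ in range(30)])                             # random phase
print(inter_trial_coherence(locked).mean() > inter_trial_coherence(jittered).mean())  # True
```

Phase-locked trials give ITC near 1 at every time point, while phase-jittered trials average out to a small residual, which is what makes ITC useful as a discriminative feature.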


Subjects
Brain-Computer Interfaces, Electroencephalography, Speech, Humans, Electroencephalography/methods, Electroencephalography/classification, Male, Speech/physiology, Female, Adult, Support Vector Machine, Young Adult, Reproducibility of Results, Algorithms
13.
Comput Biol Med ; 174: 108445, 2024 May.
Article in English | MEDLINE | ID: mdl-38603901

ABSTRACT

Transfer learning (TL) has demonstrated its efficacy in addressing the cross-subject domain adaptation challenges in affective brain-computer interfaces (aBCIs). However, previous TL methods usually use a stationary distance, such as the Euclidean distance, to quantify the distribution dissimilarity between two domains, overlooking the inherent links among similar samples and potentially leading to suboptimal feature mapping. In this study, we introduce a novel algorithm called multi-source manifold metric transfer learning (MSMMTL) to enhance the efficacy of conventional TL. Specifically, we first select the source domains based on the Mahalanobis distance to enhance their quality, and then use a manifold feature mapping approach to map the source and target domains onto the Grassmann manifold to mitigate data drift between domains. In this newly established shared space, we optimize the Mahalanobis metric by maximizing the inter-class distances while minimizing the intra-class distances in the target domain. Recognizing that significant distribution discrepancies might persist across domains even on the manifold, we further impose constraints on both domains under the Mahalanobis metric to ensure similar distributions between the source and target domains. This approach aims to reduce distributional disparities and enhance electroencephalogram (EEG) emotion recognition performance. In cross-subject experiments, the MSMMTL model achieves average classification accuracies of 88.83% and 65.04% on SEED and DEAP, respectively, underscoring the superiority of the proposed MSMMTL over other state-of-the-art methods. MSMMTL can effectively address the problem of individual differences in EEG-based affective computing.
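The Mahalanobis-distance source-selection step can be sketched as ranking candidate source domains by the distance between their mean feature vector and the target's, under the target's covariance. This is a simplified stand-in for the full MSMMTL pipeline; the regularization term and the mean-vector summary are assumptions.

```python
import numpy as np

def mahalanobis(u, v, cov_inv):
    d = u - v
    return float(np.sqrt(d @ cov_inv @ d))

def select_sources(sources, target, k=2):
    """Rank candidate source domains (samples x features each) by the
    Mahalanobis distance between their mean and the target's mean."""
    mu_t = target.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(target.T) + 1e-6*np.eye(target.shape[1]))
    dists = [mahalanobis(s.mean(axis=0), mu_t, cov_inv) for s in sources]
    return np.argsort(dists)[:k]           # indices of the k closest domains

rng = np.random.default_rng(0)
target = rng.standard_normal((100, 4))
near = rng.standard_normal((100, 4)) + 0.1     # similar distribution
far = rng.standard_normal((100, 4)) + 3.0      # dissimilar distribution
picked = select_sources([far, near], target, k=1)
print(picked)                                   # [1] -> the nearer source wins
```

Unlike the Euclidean distance criticized in the abstract, the Mahalanobis form accounts for the scale and correlation of the target's feature dimensions.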


Assuntos
Algoritmos , Interfaces Cérebro-Computador , Eletroencefalografia , Emoções , Aprendizado de Máquina , Humanos , Eletroencefalografia/métodos , Emoções/fisiologia , Processamento de Sinais Assistido por Computador , Masculino , Encéfalo/fisiologia , Feminino
14.
Article in English | MEDLINE | ID: mdl-38619940

ABSTRACT

Affective brain-computer interfaces (aBCIs) have garnered widespread applications, with remarkable advancements in utilizing electroencephalogram (EEG) technology for emotion recognition. However, the time-consuming process of annotating EEG data, inherent individual differences, non-stationary characteristics of EEG data, and noise artifacts in EEG data collection pose formidable challenges in developing subject-specific cross-session emotion recognition models. To simultaneously address these challenges, we propose a unified pre-training framework based on multi-scale masked autoencoders (MSMAE), which utilizes large-scale unlabeled EEG signals from multiple subjects and sessions to extract noise-robust, subject-invariant, and temporal-invariant features. We subsequently fine-tune the obtained generalized features with only a small amount of labeled data from a specific subject for personalization and enable cross-session emotion recognition. Our framework emphasizes: 1) Multi-scale representation to capture diverse aspects of EEG signals, obtaining comprehensive information; 2) An improved masking mechanism for robust channel-level representation learning, addressing missing channel issues while preserving inter-channel relationships; and 3) Invariance learning for regional correlations in spatial-level representation, minimizing inter-subject and inter-session variances. Under these elaborate designs, the proposed MSMAE exhibits a remarkable ability to decode emotional states from a different session of EEG data during the testing phase. Extensive experiments conducted on the two publicly available datasets, i.e., SEED and SEED-IV, demonstrate that the proposed MSMAE consistently achieves stable results and outperforms competitive baseline methods in cross-session emotion recognition.


Subjects
Algorithms, Brain-Computer Interfaces, Electroencephalography, Emotions, Humans, Emotions/physiology, Electroencephalography/methods, Female, Male, Machine Learning, Artifacts, Adult, Neural Networks
15.
Article in English | MEDLINE | ID: mdl-38625770

ABSTRACT

This study embarks on a comprehensive investigation of the effectiveness of repetitive transcranial direct current stimulation (tDCS)-based neuromodulation in augmenting steady-state visual evoked potential (SSVEP) brain-computer interfaces (BCIs), alongside exploring pertinent electroencephalography (EEG) biomarkers for assessing brain states and evaluating tDCS efficacy. EEG data were garnered across three distinct task modes (eyes open, eyes closed, and SSVEP stimulation) and two neuromodulation patterns (sham-tDCS and anodal-tDCS). Brain arousal and brain functional connectivity were measured by extracting features of fractal EEG and information flow gain, respectively. Anodal-tDCS led to diminished offsets and enhanced information flow gains, indicating improvements in both brain arousal and brain information transmission capacity. Additionally, anodal-tDCS markedly enhanced SSVEP-BCIs performance as evidenced by increased amplitudes and accuracies, whereas sham-tDCS exhibited lesser efficacy. This study proffers invaluable insights into the application of neuromodulation methods for bolstering BCI performance, and concurrently authenticates two potent electrophysiological markers for multifaceted characterization of brain states.


Subjects
Brain-Computer Interfaces, Electroencephalography, Visual Evoked Potentials, Fractals, Transcranial Direct Current Stimulation, Humans, Transcranial Direct Current Stimulation/methods, Visual Evoked Potentials/physiology, Male, Adult, Female, Young Adult, Arousal/physiology, Brain/physiology, Healthy Volunteers, Algorithms
16.
Article in English | MEDLINE | ID: mdl-38578854

ABSTRACT

Predicting the potential for recovery of motor function in stroke patients who undergo specific rehabilitation treatments is an important and major challenge. Recently, electroencephalography (EEG) has shown potential in helping to determine the relationship between cortical neural activity and motor recovery. EEG recorded in different states could predict motor recovery more accurately than single-state recordings. Here, we design a multi-state fusion neural network, combining the eyes-closed (EC) and eyes-open (EO) states, to predict the motor recovery of patients with stroke after EEG brain-computer interface (BCI) rehabilitation training, and use an explainable deep learning method to identify the EEG power spectral density and functional connectivity features that contribute most to the prediction. The prediction accuracy of the multi-state fusion network was 82%, a significant improvement over a single-state model. The network explanation results identified the regions and frequency bands most related to motor recovery: in both states, the predictive power spectral density and functional connectivity features lay in the frontal, central, and occipital regions. In terms of frequency bands, the power spectral density related to motor recovery lay in the delta and alpha bands, while the predictive functional connectivity lay in the delta, theta, and alpha bands in the EC state, and in the delta, theta, and mid-beta bands in the EO state. Multi-state fusion neural networks, which combine multiple states of EEG signals into a single network, can increase the accuracy of predicting motor recovery after BCI training and reveal the underlying mechanisms of motor recovery in brain activity.


Subjects
Brain-Computer Interfaces, Deep Learning, Stroke Rehabilitation, Stroke, Humans, Electroencephalography/methods, Stroke Rehabilitation/methods
17.
Article in English | MEDLINE | ID: mdl-38598402

ABSTRACT

Canonical correlation analysis (CCA), the multivariate synchronization index (MSI), and their extended methods have been widely used for target recognition in brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs), and covariance calculation is an important step in these algorithms. Some studies have shown that embedding time-local information into the covariance can improve the recognition performance of the above algorithms. However, the improvement can only be observed in the recognition results; the principle by which time-local information helps cannot be explained. Therefore, we propose a time-local weighted transformation (TT) recognition framework that directly embeds time-local information into the electroencephalography signal through a weighted transformation. The influence of time-local information on the SSVEP signal can then be observed in the frequency domain: low-frequency noise is suppressed at the cost of sacrificing part of the SSVEP fundamental-frequency energy, and the harmonic energy of the SSVEP is enhanced at the cost of introducing a small amount of high-frequency noise. The experimental results show that the TT recognition framework can significantly improve the recognition ability of the algorithms and the separability of the extracted features. Its enhancement effect is significantly better than that of the traditional time-local covariance extraction method, giving it great application potential.
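For context, standard CCA-based SSVEP target recognition, the baseline family these methods extend, can be sketched with plain NumPy: the detected frequency is the one whose sine/cosine reference set yields the largest canonical correlation with the multichannel EEG segment. The sampling rate, harmonic count, and synthetic-signal parameters below are assumptions.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y
    (rows = samples), computed via QR + SVD of centered data."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_detect(eeg, fs, candidates, n_harm=2):
    """Pick the stimulus frequency whose sin/cos reference set correlates
    most with the EEG segment (time x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidates:
        ref = np.column_stack([fn(2*np.pi*f*h*t)
                               for h in range(1, n_harm+1)
                               for fn in (np.sin, np.cos)])
        scores.append(max_canonical_corr(eeg, ref))
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
fs, t = 250, np.arange(0, 2, 1/250)
eeg = np.column_stack([np.sin(2*np.pi*12*t + p) + 0.5*rng.standard_normal(t.size)
                       for p in (0.0, 0.3, 0.6)])   # 3 channels of 12 Hz SSVEP
print(ssvep_detect(eeg, fs, [10, 12, 15]))          # 12
```

The TT framework described above would weight the EEG segment before this covariance/correlation step; the sketch shows only the unweighted baseline.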


Subjects
Brain-Computer Interfaces , Humans , Evoked Potentials, Visual , Pattern Recognition, Automated/methods , Recognition, Psychology , Electroencephalography/methods , Algorithms , Photic Stimulation
18.
Article in English | MEDLINE | ID: mdl-38598403

ABSTRACT

Steady-state visual evoked potential (SSVEP), one of the most popular electroencephalography (EEG)-based brain-computer interface (BCI) paradigms, can achieve high performance using calibration-based recognition algorithms. Because collecting calibration data for such algorithms is time-consuming, the least-squares transformation (LST) has been used to reduce the calibration effort for SSVEP-based BCIs. However, the transformation matrices constructed by current LST methods are not precise enough, resulting in large differences between the transformed data and the real data of the target subject. This ultimately makes the constructed spatial filters and reference templates less effective. To address these issues, this paper proposes multi-stimulus LST with an online adaptation scheme (ms-LST-OA). METHODS: The proposed ms-LST-OA consists of two parts. First, to improve the precision of the transformation matrices, we propose multi-stimulus LST (ms-LST), which uses a cross-stimulus learning scheme as the cross-subject data transformation method. The ms-LST uses the data from neighboring stimuli to construct a higher-precision transformation matrix for each stimulus, reducing the differences between transformed and real data. Second, to further optimize the constructed spatial filters and reference templates, we use an online adaptation scheme that learns more features of the target subject's EEG signals through a trial-by-trial iterative process. RESULTS: ms-LST-OA performance was measured on three datasets (Benchmark, BETA, and UCSD). Using only a small amount of calibration data, ms-LST-OA achieved ITRs of 210.01±10.10 bits/min, 172.31±7.26 bits/min, and 139.04±14.90 bits/min on the three datasets, respectively. CONCLUSION: Using ms-LST-OA can reduce the calibration effort for SSVEP-based BCIs.
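At its core, an LST-style method fits a linear map from an existing subject's data to the target subject's data by least squares. The toy sketch below shows only that single least-squares step on synthetic data; the shapes and fitting setup are illustrative assumptions, not the ms-LST algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes (assumed): 200 time points x 8 channels per trial.
n_samples, n_channels = 200, 8
source = rng.standard_normal((n_samples, n_channels))  # existing subject
true_map = rng.standard_normal((n_channels, n_channels))
target = source @ true_map                             # target subject

# Least-squares transformation: W minimizing ||source @ W - target||_F.
W, *_ = np.linalg.lstsq(source, target, rcond=None)
transformed = source @ W  # recovers target exactly in this noiseless toy
```

The transformed trials then stand in for real calibration data of the target subject when building spatial filters and reference templates.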


Subjects
Brain-Computer Interfaces , Evoked Potentials, Visual , Humans , Calibration , Photic Stimulation/methods , Electroencephalography/methods , Algorithms
19.
J Integr Neurosci ; 23(4): 73, 2024 Apr 07.
Article in English | MEDLINE | ID: mdl-38682224

ABSTRACT

BACKGROUND: To enhance the information transfer rate (ITR) of a steady-state visual evoked potential (SSVEP)-based speller, more characters with flickering symbols should be used, but increasing the number of symbols can reduce classification accuracy. A hybrid brain-computer interface (BCI) improves the overall performance of a BCI system by taking advantage of two or more control signals. In a simultaneous hybrid BCI, the modalities operate at the same time, which enhances the ITR. METHODS: In our proposed speller, electromyogram (EMG) and SSVEP signals were combined simultaneously to increase the ITR. To encode 36 characters, only nine stimulus symbols were used; each symbol allowed the selection of four characters based on four states of muscle activity. The SSVEP detected which symbol the subject was focusing on, and the EMG determined the target character among the four characters dedicated to that symbol. The frequency of muscle activity was used for character encoding in the EMG modality, and latency was accounted for in the SSVEP modality. Online experiments were carried out on 10 healthy subjects. RESULTS: The average ITR of this hybrid system was 96.1 bits/min with an accuracy of 91.2%. The speller speed was 20.9 char/min. Latency values varied across subjects; we used an average latency of 0.2 s for all subjects. Evaluating each modality separately showed that the SSVEP classification accuracy ranged from 80% to 100% across subjects, while the EMG classification accuracy was approximately 100% for all subjects. CONCLUSIONS: Our proposed hybrid BCI speller improves system speed compared with state-of-the-art systems based on SSVEP or SSVEP-EMG, and can provide a user-friendly, practical speller system.
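ITR figures like those quoted in these abstracts are commonly computed with Wolpaw's formula, ITR = (log2 N + P log2 P + (1−P) log2((1−P)/(N−1))) × selections/min. A minimal sketch follows; note that published numbers sometimes use variant conventions, so the value below need not reproduce the 96.1 bits/min reported above.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw information transfer rate in bits/min."""
    if not 0.0 < accuracy <= 1.0:
        raise ValueError("accuracy must be in (0, 1]")
    bits = math.log2(n_targets)
    if accuracy < 1.0:  # the 0 * log2(0) terms vanish at accuracy == 1
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1))
    return max(bits, 0.0) * selections_per_min

# Figures from the abstract: 36 characters, 91.2% accuracy, 20.9 selections/min.
itr = wolpaw_itr(36, 0.912, 20.9)  # ≈ 89.6 bits/min under this convention
```

With perfect accuracy the formula reduces to log2(N) bits per selection, which is why adding EMG states (9 symbols × 4 states = 36 targets) can raise the ITR without adding flickering stimuli.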


Subjects
Brain-Computer Interfaces , Electroencephalography , Electromyography , Evoked Potentials, Visual , Humans , Evoked Potentials, Visual/physiology , Adult , Male , Electroencephalography/methods , Female , Young Adult , Brain/physiology
20.
Article in English | MEDLINE | ID: mdl-38648154

ABSTRACT

Machine learning has achieved great success in electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Most existing BCI studies have focused on improving decoding accuracy, with only a few considering adversarial security. Although many adversarial defense approaches have been proposed in other application domains such as computer vision, previous research showed that directly extending them to BCIs degrades the classification accuracy on benign samples. This phenomenon greatly limits the applicability of adversarial defenses to EEG-based BCIs. To mitigate this problem, we propose alignment-based adversarial training (ABAT), which performs EEG data alignment before adversarial training. Data alignment aligns EEG trials from different domains to reduce their distribution discrepancies, and adversarial training further robustifies the classification boundary. The combination of data alignment and adversarial training makes the trained EEG classifiers simultaneously more accurate and more robust. Experiments on five EEG datasets from two BCI paradigms (motor imagery classification and event-related potential recognition), three convolutional neural network classifiers (EEGNet, ShallowCNN, and DeepCNN), and three experimental settings (offline within-subject cross-block/-session classification, online cross-session classification, and pre-trained classifiers) demonstrated its effectiveness. Intriguingly, adversarial attacks, which are usually used to damage BCI systems, can be used in ABAT to simultaneously improve model accuracy and robustness.
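The abstract does not detail the alignment procedure; a common choice in this literature is Euclidean alignment, which whitens each domain's trials by the inverse square root of its mean trial covariance so that distributions from different sessions or subjects become comparable. A rough numpy sketch under that assumption:

```python
import numpy as np

def euclidean_align(trials):
    """Euclidean alignment: whiten trials (n_trials, n_channels, n_samples)
    by the inverse square root of their mean spatial covariance."""
    covs = np.stack([t @ t.T / t.shape[1] for t in trials])
    R = covs.mean(axis=0)                       # reference covariance
    vals, vecs = np.linalg.eigh(R)              # R is symmetric PSD
    R_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.einsum("ij,njk->nik", R_inv_sqrt, trials)

rng = np.random.default_rng(1)
trials = rng.standard_normal((20, 4, 100))      # toy session: 20 trials
aligned = euclidean_align(trials)

# After alignment the mean spatial covariance is the identity.
mean_cov = np.mean([t @ t.T / t.shape[1] for t in aligned], axis=0)
```

Adversarial training would then generate perturbed examples from the aligned trials, so the classifier learns a boundary that is robust within the shared, discrepancy-reduced feature space.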


Subjects
Algorithms , Brain-Computer Interfaces , Electroencephalography , Imagination , Machine Learning , Neural Networks, Computer , Electroencephalography/methods , Humans , Imagination/physiology , Evoked Potentials/physiology